Patent abstract:
A system and method for recognizing items in media data and distributing information related thereto establishes the location and identity of individual items in photos or videos. Once one or more items in the photo/video are identified and matched to images in a reference database, item locations are established and additional information related to the items is accessed. Collectively, the position data and additional data are merged into electronic photos or videos and then provided to a user as a merged data stream. Additional functionality related to these identified items may occur when these locations are "pointed", "clicked" or otherwise selected (e.g., purchase an item, request information, select another video stream, play a game, share the item, rate, "like", and the like).
Publication number: BR112013018142B1
Application number: R112013018142-7
Filing date: 2012-01-18
Publication date: 2021-06-15
Inventor: John McDevitt
Applicant: Hsni, Llc;
Primary IPC:
Patent description:

BACKGROUND OF THE INVENTION
[001] With the continued development of portable media players, social networking services, wireless data transmission speeds, etc., individuals continue to be introduced to more and more image and video content. However, when an individual receives a digital photo or video stream, or the like, they may also want additional information about something in the content, such as an item, a person, a logo, or even a building or landmark. For example, a video stream might include a scene filmed at the Statue of Liberty, and the viewer might want to receive historical information about this landmark. In addition, a video stream might include a famous actress carrying a new designer bag or a famous athlete using a cell phone, each of which might be of interest to a consumer who wants to know more about the item, share it with a friend via a social networking website, or the like, or even buy the item. In conventional systems, the viewer/consumer is unable to quickly transform their general interest in a particular item into the ability to obtain additional information or engage in an e-commerce shopping session related to the item of interest.
SUMMARY OF THE INVENTION
[002] Consequently, a system is needed that recognizes individual items or sets of items (collectively, items) in source content and accesses information regarding the recognized items, which can then be requested by, or automatically pushed to, the end user in order to facilitate additional interaction related to the recognized item. Therefore, the system and method described in this document relate to determining both the location and identity of items in images (both photos and videos) and to rendering additional functionality for these identified items when the end user "points", "clicks", or otherwise selects the identified items.
[003] Specifically, a system is provided that includes an electronic database that stores a plurality of digital images of items and information related to each of the plurality of items; and a processor that digitizes source content having a plurality of elements and identifies any items that match the plurality of items stored in the database. In addition, the processor generates position data that indicates the position of the identified item and links and/or merges the item to information related to the identified item(s) and to the position data. In addition, a method is provided that digitizes the source content, identifies items in the source content that correspond to a digital image stored in an electronic database, generates position data that indicates the position of the identified item, accesses information related to the identified item, and links and/or merges the item to the position data and the information related to the identified item.
BRIEF DESCRIPTION OF THE DRAWINGS
[004] Figure 1 illustrates a block diagram of a system for recognizing items in media data and distributing related information according to an exemplary embodiment.
[005] Figure 2 illustrates a flowchart of a method for recognizing items in media data and distributing related information according to an exemplary embodiment.
[006] DETAILED DESCRIPTION OF THE INVENTION
[007] The following detailed description describes possible embodiments of the proposed system and method for exemplary purposes. The system and method are in no way limited to any specific combination of hardware and software. As will be described below, the system and method described herein relate to establishing both the location and identity of individual items in images. Once one or more items in the images and/or video are identified and the locations of the items established, additional functionality related to these identified items may occur when these identified locations are "pointed", "clicked" or otherwise selected (for example, buy an item, request information, select another video stream, play a game, share the item, rate, "like" and the like).
[008] Figure 1 illustrates a block diagram of a system 100 for recognizing items in media data and distributing related information according to an exemplary embodiment. In general, system 100 is divided into a remote processing system 102 and a user location 104. In the exemplary embodiment, the remote processing system 102 may be associated with a secondary processing system (e.g., a digital video recorder, a product provider, etc.), which can be located at the remote processing system 102 or at the user location 104, and/or with a content provider that is capable of processing the data transmitted from the user location 104. A general illustration of the relationship between a user's location, a product delivery server, i.e., a secondary processing system, and a content provider is discussed in U.S. Patent No. 7,752,083 to Johnson et al., granted on July 6, 2010, and entitled "METHOD AND SYSTEM FOR IMPROVED INTERACTIVE TELEVISION PROCESSING," which is incorporated herein in its entirety by reference. Additionally, the user location 104 can be considered any location where an end user/consumer is able to view an image and/or a video stream on a display device 145. It is noted that the terms "end user," "user" and "consumer" are used interchangeably in this document and may refer to a human or another system, as will be described in greater detail later.
[009] As shown in Figure 1, the remote processing system 102 includes a content source 110 that provides the source images, i.e., source content, which is ultimately transmitted to the user after being processed by the other components of remote processing system 102, as will be discussed below.
[010] In one embodiment, content source 110 may be a content provider such as the one discussed above with reference to U.S. Patent No. 7,752,083. Additionally, the source content can be live or pre-recorded, analog or digital, and still (photo) or streaming (video).
[011] The remote processing system 102 further includes a reference content database 115 that contains a plurality of known images (photo or video — collectively, images). In particular, the reference content database 115 can store images referring to the elements that can be displayed in the source content. For example, stored images may refer to consumer products (e.g., electronics, apparel, jewelry, etc.), marketing or branded items (e.g., logos, brands, etc.), individuals, locations (e.g., buildings, landmarks, etc.), items invisible to humans (fingerprints, watermarks, etc.), or any other elements that are capable of being identified in the source content. Image data in the reference content database 115 may be updated on a continuous or periodic basis by a system, a system administrator, or the like.
[012] The remote processing system 102 further includes a matching processor 120 that is coupled to both the content source 110 and the reference content database 115. The matching processor 120 is configured to compare images in the reference content database 115 with elements in the source content provided by the content source 110. More particularly, the matching processor 120 uses conventional digitizing and image recognition algorithms to scan the image content, compare the elements in the source content with the images stored in the reference content database 115, and identify matches. The scanning and related matching process can take place on an ongoing or periodic basis. During the matching process, each potential item in the source content is compared to the images stored in the reference content database 115. When the comparison results in a match, the matching processor 120 identifies the matched item. If there are no matches, the matching processor 120 continues to scan the source content as it updates/changes, to continuously or periodically check whether elements in the source content match the images in the reference content database 115. It should be appreciated that areas of the source content that do not have items identified in them can be identified as such.
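As a purely illustrative sketch of the kind of comparison the matching processor 120 might perform — the patent does not prescribe any particular recognition algorithm — the following Python stub treats each stored reference image as a precomputed feature vector and returns the closest reference item above a similarity threshold. The ReferenceItem fields, the cosine-similarity measure, and the threshold value are all assumptions made for illustration only.

from dataclasses import dataclass
from math import sqrt

@dataclass
class ReferenceItem:
    item_id: str              # key into the additional information database 130
    features: list[float]     # precomputed descriptor for the stored reference image

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def match_element(element_features: list[float],
                  reference_db: list[ReferenceItem],
                  threshold: float = 0.9) -> ReferenceItem | None:
    # Return the best-matching reference item, or None when nothing clears the threshold,
    # in which case the matching processor would simply keep scanning the source content.
    best, best_score = None, threshold
    for item in reference_db:
        score = cosine_similarity(element_features, item.features)
        if score >= best_score:
            best, best_score = item, score
    return best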
[013] It is further contemplated that the reference content database 115 can store certain images as predetermined marker items. Specifically, the reference content database 115 can store images with predefined identification data (e.g., marker features) that allow the matching processor 120 to identify items that match the marker features more quickly and more accurately. Preferably, it is contemplated that items frequently displayed in the source content are stored as predetermined marker items in the reference content database 115, so that the reference content database 115 is organized to contain subsets of items (associated by marker features) that have a higher probability of successfully matching elements in the specific source content. For example, a subset of items that are most likely to be matched during a sporting event (e.g., a team logo) can be generated and referenced during the digitization process when the source content is a game involving the specific team having such a logo. As a result, the subset of items can be employed to increase the quality of item matches (increased correct matches and reduced false-positive matches), effectively reducing the processing requirements of the matching processor 120. Furthermore, in one embodiment, items stored in the reference content database 115 may include data fields that link similar items. For example, data fields can be provided that link items similar in type, time, relation, or the like (for example, all television images have a field in common, images of things that occur around an event, such as Valentine's Day, have a field in common, or items that are traditionally linked, such as salt and pepper, have a field in common). In addition, the matching processor 120 can perform an iterative process to match an element in the source content to an item stored in the reference content database 115 by making an initial predicted match on the first image or frame and then refining the prediction for each subsequent scan until a conclusive match is made and the item is identified.
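A minimal sketch of the marker/subset idea described above, assuming each stored reference entry carries an optional set of context tags (for example, a team name during a sports broadcast); the "marker_tags" field name and the fallback behaviour are illustrative assumptions, not taken from the patent.

def candidate_subset(reference_db: list[dict], context_tags: set[str]) -> list[dict]:
    # Keep only reference entries whose marker tags overlap the current context,
    # so that matching runs against a smaller, higher-probability candidate set.
    subset = [item for item in reference_db
              if context_tags & set(item.get("marker_tags", ()))]
    return subset or reference_db  # fall back to the full database if nothing is tagged for this context

For a broadcast of a game, candidate_subset(reference_db, {"team-x"}) could be consulted first, with the full database used only if the subset yields no conclusive match.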
[014] As shown, the location determination processor 125 is coupled to the matching processor 120 and is configured to identify the location of any matched items identified by the matching processor 120. In the exemplary embodiment, the location of the matched items can be defined on a Cartesian coordinate plane, or as a position based on another location system (collectively, X, Y coordinates, as an individual point or as a set of points). The location determination processor 125 is configured to generate metadata setting the X, Y coordinates for each position of the matched item relative to the source content as a whole. Accordingly, for each position of the matched item, the location determination processor 125 generates metadata for the specific X, Y coordinates of that item as it is positioned within the source content image that includes it. For each subsequent image (including each video frame), the location determination processor 125 continues to track the item's movement as its position varies in the source content and continues to generate metadata corresponding to the item's position. In the exemplary embodiment, the item's position can be denoted by any set of X, Y coordinates or by the center point of the item's shape.
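The position metadata produced by the location determination processor 125 could take many concrete forms; the sketch below, with its field names and JSON layout, is one assumed representation in which an item is tracked as a set of X, Y points per frame, with the center point derivable from those points.

from dataclasses import dataclass, asdict
import json

@dataclass
class ItemPosition:
    item_id: str                     # identity established by the matching processor 120
    frame: int                       # frame index within the source content
    points: list[tuple[int, int]]    # X, Y coordinates outlining the item

    def center(self) -> tuple[float, float]:
        xs = [x for x, _ in self.points]
        ys = [y for _, y in self.points]
        return sum(xs) / len(xs), sum(ys) / len(ys)

# The same item tracked across two consecutive frames as it moves slightly.
track = [
    ItemPosition("handbag-001", frame=120, points=[(400, 310), (470, 310), (470, 390), (400, 390)]),
    ItemPosition("handbag-001", frame=121, points=[(404, 312), (474, 312), (474, 392), (404, 392)]),
]
position_metadata = json.dumps([asdict(p) for p in track])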
[015] It should be understood by those of skill in the art that although the matching processor 120 and the location determination processor 125 are described as separate processors, in an alternative embodiment a single processor may perform both the matching and location identification processes, as well as the creation of item identity and location metadata.
[016] The remote processing system 102 further includes an additional information database 130. Although the additional information database 130 is described in the exemplary embodiment as being located in the remote processing system 102, the additional information database 130 may also be located at the user location 104, as will be described in greater detail later.
[017] In either embodiment, the additional information database 130 contains additional information about the reference images stored in the reference content database 115. Specifically, the additional information database 130 is configured to store descriptive and relational information related to the item, including pricing information, size information, product descriptions, product reviews, and the like, as well as links to other information sources such as web sites on the Internet. Thus, in operation, once the matched item is identified, the remote processing system 102 subsequently accesses the additional information database 130, which identifies all additional information related to the specific matched item. It should be appreciated that the additional information database 130 may not contain additional information related to a given item. In a refinement of the exemplary embodiment, the additional information can be a data path to more detailed information about an item. Thus, rather than initially providing all the additional information related to the item, the additional information initially accessed from the additional information database 130 can be a path to this information. Therefore, only when the user is interested in the matched item and wants to see additional information about it does the additional information database 130 subsequently access the metadata related to the detailed information for the matched item.
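The two lookup styles described above — merging a lightweight record up front and resolving full details only on selection — might be modelled as follows. The record fields, the sample item, the example URL, and the "info_path" indirection are illustrative assumptions rather than structures specified by the patent.

ADDITIONAL_INFO = {
    "handbag-001": {
        "description": "Designer handbag",
        "price": "299.00",
        "reviews_url": "https://example.com/items/handbag-001/reviews",
        "info_path": "/items/handbag-001/details",   # data path to the richer detail record
    },
}

def summary_for(item_id: str) -> dict | None:
    # Lightweight record merged into the stream up front: identity plus a path to more detail.
    record = ADDITIONAL_INFO.get(item_id)
    if record is None:
        return None                     # no additional information stored for this item
    return {"item_id": item_id, "info_path": record["info_path"]}

def details_for(item_id: str) -> dict | None:
    # Full record, resolved only after the user expresses interest in the matched item.
    return ADDITIONAL_INFO.get(item_id)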
[018] It should also be appreciated by those skilled in the art that although the reference content database 115 and the additional information database 130 are described as separate databases, in an alternative embodiment a single database can be provided to store both the image information and the additional information about the referenced items.
[019] Once the additional information is identified by the additional information database 130, the merge processor 135 is provided to merge this metadata, the metadata referring to the location information calculated by the location determination processor 125, and the source content provided by the content source 110 into a format that can be received/interpreted by the display device 145 at the user location 104. In the exemplary embodiment, in which the source content is generated live or is pre-recorded, the matching takes place so that the content and the item identification and location metadata are distributed synchronously. In a further embodiment, the content with the related synchronized identification and location metadata can be stored and later delivered by the distribution server 140 to the display device 145. The rendering of this combined data can be visible or invisible, in whole or in part. At this point, the remote processing system 102 is configured to make the items on the display device "active" through any method known to those skilled in the art; for example, they are made "selectable" or "clickable" by the end user/consumer. Additionally, the distribution server 140 is coupled to the merge processor 135 and configured to transmit the new integrated video stream to the user location 104 using any conventional data communication method (e.g., over-the-air broadcast, cable broadcast, Direct Broadcast Satellite ("DBS"), Telco, wifi, 3G/4G, IP-enabled, etc.). It is also contemplated that, in an alternative embodiment, the rendering process that makes the items "active" is performed by the display device 145.
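One way the merge processor 135 could package a frame's worth of item identity, position, and additional-information metadata for synchronous delivery is sketched below. The JSON side-band keyed by frame number is an assumed container format; the patent only requires that the merged data be in a form the display device 145 can receive and interpret.

import json

def merge_frame(frame_number: int, matched_items: list[dict]) -> str:
    # matched_items entries are assumed to look like:
    # {"item_id": ..., "points": [[x, y], ...], "info_path": ...}
    payload = {
        "frame": frame_number,
        "items": [
            {
                "item_id": item["item_id"],
                "points": item["points"],            # from the location determination processor 125
                "info_path": item.get("info_path"),  # from the additional information database 130
                "active": True,                      # rendered selectable/clickable at the display device 145
            }
            for item in matched_items
        ],
    }
    return json.dumps(payload)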
[020] The user location 104 comprises a display device 145 that is configured to receive image/video and audio content (e.g., an IP data stream) and is capable of displaying an image/video transmission and, more particularly, the new integrated video stream generated by the merge processor 135 and transmitted by the distribution server 140. It should be understood that the display device 145 may be any suitable device capable of viewing the new integrated image/video stream, which includes, but is not limited to, a computer, a smartphone, a PDA, a laptop computer, a notebook computer, a television, a display device with a decoder-type processor (internal or external to the display device), a Blu-ray player, a video game console (internal or external to a television, or similar), a Tablet PC, or any other device (individually or as part of a system) that can receive, interpret, and render image/video content on a screen, as well as interpret related metadata, receive user input related to the merged content and metadata, display additional information in response to user input, and/or send such user input to a locally and/or remotely connected secondary system.
[021] Additionally, the display device 145, with internal or external processor(s), is configured to allow a user to select the identified items in some manner and perform additional actions. This process can be a one-time process in the case of photos, or it can be continuous in the case of video. In the exemplary embodiment, the user's selection of one or more identified items will result in additional information about the item being displayed to the user on the display device 145. Additionally or alternatively, the response to the user's selection can be sent to one or more secondary systems, either on a continuous or periodic basis. The user can select the identified item using any applicable selection method, such as a mouse pointer, touch screen, or the like. Therefore, when the display device 145 displays the new integrated video stream that includes one or more "active" items, as discussed above, and the end user selects a particular active item, the user can view and/or access the additional information related to the matched item. As mentioned earlier, the end user can also be a system. For example, when the new integrated video stream is being interpreted by the display device 145, one or more items can be automatically identified and selected by the display device 145 (for example, by an associated processor). For example, if a user is watching a free version of a movie, this embodiment contemplates that the processor of the display device 145 automatically identifies and selects one or more items, causing information (e.g., product advertisements) to be displayed to the end user. Alternatively, if the user pays to download and watch the movie, this feature can be turned off.
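When the user points or clicks, the display device 145 (or its associated processor) needs to decide which active item, if any, was selected. The bounding-box hit test below is a hedged illustration only, since the patent leaves the selection mechanics open; the item dictionary shape is carried over from the earlier sketches.

def hit_test(click_x: int, click_y: int, active_items: list[dict]) -> dict | None:
    # Return the active item whose region contains the click coordinates, if any.
    for item in active_items:
        xs = [x for x, _ in item["points"]]
        ys = [y for _, y in item["points"]]
        if min(xs) <= click_x <= max(xs) and min(ys) <= click_y <= max(ys):
            return item
    return None

A selection found this way would then trigger display of the additional information, or be forwarded to the remote processing system 102 or a secondary system, as described above.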
[022] It is also noted that, in an alternative embodiment, the new integrated video stream generated by the merge processor 135 includes only the metadata related to item identification and position. Specifically, in this embodiment, the additional information in the additional information database 130 that is related to the identified item is not initially merged into the integrated video stream. Instead, the integrated video stream is streamed to the end user without the additional information. Only after the end user selects the identified item is a request sent by the display device 145 to the additional information database 130 in the remote processing system 102, which accesses the additional information and transmits it back to the display device 145. In yet another embodiment, the additional information database 130 may be located at the user location 104.
[023] In a refinement of the exemplary embodiment, an electronic purchase request can be transmitted back to the distribution server 140 when the user selects the identified item, which in turn causes the remote processing system 102 to initiate an electronic purchase interaction with the end user that allows the end user to review and, if desired, buy the selected item. Exemplary electronic purchasing systems and methods are described in U.S. Patent Nos. 7,752,083 and 7,756,758 and in U.S. Patent Publication No. 2010/0138875, the contents of all of which are incorporated herein by reference.
[024] In addition, one or more secondary systems 150 may be provided at the user location 104 and coupled to the display device 145. These additional systems are additional processors that allow for a wide variety of functionality known to those skilled in the art (e.g., digital video recorders, email systems, social networking systems, etc.) and that can be interfaced via a connection to the display device 145.
[025] It is also noted that although the exemplary embodiment describes the new integrated video stream as a single data stream that includes the source content, the metadata referring to the additional information that is merged into the source content, and the metadata for the X, Y coordinates of the matched items, in an alternative embodiment two separate data streams containing this information can be transmitted by the distribution server 140 to the user location 104 and then merged by one or more processors in (or connected to) the display device 145. For example, the source content can be transmitted as a first data stream using conventional transmission methods (e.g., standard broadcast, DBS, cable-distributed video, or the like) and the metadata about the matched items (i.e., the additional information and position information) can be transmitted using IP data communication methods (e.g., wifi, 3G/4G, IP-enabled, and the like). In this embodiment, the merge processor 135 is located at the user location 104 and is coupled to the display device 145 to perform the same merge processing steps described above.
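In the two-stream variant just described, a processor at the user location 104 has to pair the conventionally delivered video with the IP-delivered metadata before display. Pairing by frame number, as sketched below, is one assumed synchronization strategy; the patent does not mandate a specific mechanism.

def merge_streams(video_frames, metadata_messages):
    # video_frames: iterable of (frame_number, frame) pairs from the conventional broadcast path.
    # metadata_messages: iterable of dicts (each with a "frame" key) from the IP path.
    pending = {message["frame"]: message for message in metadata_messages}
    for frame_number, frame in video_frames:
        # Frames with no matched items simply carry an empty item list.
        yield frame, pending.get(frame_number, {"frame": frame_number, "items": []})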
[026] It should be further understood that although the various components are described as being part of the remote processing system 102, there is no intention that these components be located in the same physical location. In an alternative embodiment, one or more of the processes can be performed by processors that are internal or external to the display device 145. For example, in one embodiment, source content that has not been processed by the remote processing system 102 can be transmitted directly to the display device 145. When the user selects or clicks on a particular element in the source content, a location determination processor provided on the display device 145 can generate metadata that sets the X, Y coordinates for the selected item. This metadata can then be transmitted to the remote processing system 102, where the selected element is compared to images in the reference content database 115 by the matching processor 120. If a match is identified, this information is processed as described above with respect to the other components of the remote processing system 102, and a new integrated video stream is pushed back to the user that includes the additional information about the element initially selected by the user. Furthermore, although each of the components described in the remote processing system 102 is provided with one or more specific functions, no component is in any way intended to be limited to these functions. For example, different components can provide different processing functions within the context of the invention, and/or a single component can perform all of the functions described above in relation to the exemplary embodiment.
[027] Finally, it should be understood that each of the aforementioned components of the remote processing system 102 and the user location 104 comprises all of the requisite hardware and software modules to enable communication with each of the other respective components. These hardware components can include conventional interfaces such as modems, network cards, and the like. These hardware components and software applications are known to those skilled in the art and have not been described in detail in order not to unnecessarily obscure the description of the invention. Furthermore, the program instructions for each of the components can be in any suitable form. In particular, some or all of the instructions may be provided in programs written in a self-describing computer language, for example, Hypertext Markup Language (HTML), Extensible Markup Language (XML), or the like. The transmitted program instructions can be used in combination with other previously installed instructions, for example, to control the manner of displaying data items described in a received program markup sheet.
[028] Figure 2 illustrates a flowchart of a method 200 for recognizing items in media data and distributing the related information according to an exemplary embodiment. The following method is described in relation to the components in Figure 1 and their associated functionality, as discussed above.
[029] As shown in Figure 2, initially, in step 205, the content source 110 in the remote processing system 102 generates a source photo or video that is provided to the matching processor 120. In step 210, the matching processor 120 uses scanning methods and/or other known image matching techniques to compare elements in the source content to the item images in the reference content database 115. These images can include a wide variety of things. For example, stored images can be related to consumer products (e.g., electronics, apparel, jewelry, etc.), marketing or branded items (e.g., logos, brands, etc.), individuals, locations (e.g., buildings, landmarks, etc.), or any other elements that are capable of being identified in the source content. If no match is identified, the remote processing system 102 does nothing and the matching processor 120 continues to digitize the source content provided by the content source 110. Additionally, in a further embodiment, areas of the source content that do not contain any identified items can be identified as such.
[030] Alternatively, if the matching processor 120 identifies a match between an element in the source content and a reference item image in the reference content database 115, method 200 proceeds to step 215, in which the position of the matched item is calculated by the location determination processor 125. Specifically, in step 215, the location determination processor 125 generates metadata that sets the X, Y coordinates for each position of the matched item. Next, at step 220, the remote processing system 102 accesses the additional information database 130 to identify additional information related to the identified item. This information may include descriptive or relational information related to the item, including pricing information, size information, product descriptions, product reviews, and the like, as well as links to other information sources such as websites on the Internet, or, alternatively, a data path to this detailed information.
[031] Once the additional information is identified, the method proceeds to step 225, where the merge processor 135 merges this additional information, the metadata related to the location information calculated by the location determination processor 125, and the content provided by the content source 110 into a format that can be received/interpreted by the display device 145 at the user location 104.
[032] In step 230, the new integrated video stream is then transmitted by the distribution server 140 to the user location 104. Next, in step 235, when the display device 145 receives the new integrated video stream, the display device 145 renders visible or invisible indicators on the matched items, making them "active," i.e., the matched items are rendered "selectable" or "clickable" by the end user/consumer, and additional information related to the matched item can be displayed on the display device 145 in response to the user's selection of the active item. As noted earlier, this step can also be performed by the remote processing system 102. Finally, as an example, in step 240, if a particular item is selected by the user/consumer, the remote processing system 102 will launch an electronic purchase interaction with the user/consumer, allowing the user/consumer to review and, if desired, purchase the selected item. As noted above, exemplary electronic purchasing systems and methods are disclosed in U.S. Patent Nos. 7,752,083 and 7,756,758 and in U.S. Patent Publication No. 2010/0138875.
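Pulling steps 205 through 235 together, the following compressed sketch shows one possible shape of the overall pipeline. Every helper and data shape here is a placeholder standing in for the components of Figure 1 under stated assumptions, not a definitive implementation.

def process_source_content(frames, reference_db, additional_info, send):
    # frames: iterable of (frame_number, regions), where regions maps a region id to
    #         (features, points); reference_db: {item_id: features};
    # additional_info: {item_id: info dict}; send: delivers one merged payload downstream.
    for frame_number, regions in frames:                      # steps 205/210: scan each frame
        items = []
        for features, points in regions.values():
            item_id = best_match(features, reference_db)      # step 210: compare to database 115
            if item_id is None:
                continue                                      # no match: keep scanning
            items.append({
                "item_id": item_id,
                "points": points,                             # step 215: X, Y position metadata
                "info": additional_info.get(item_id),         # step 220: additional information
                "active": True,                               # step 235: selectable on the display device
            })
        send({"frame": frame_number, "items": items})         # steps 225/230: merge and transmit

def best_match(features, reference_db):
    # Placeholder exact-match lookup; any real similarity measure could be substituted.
    for item_id, reference_features in reference_db.items():
        if features == reference_features:
            return item_id
    return None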
[033] It should be understood that although method 200 comprises certain steps performed by components in the remote processing system 102 and certain steps performed by components at the user location 104, method 200 is in no way intended to be limited in this regard. For example, as described above, certain processes performed by components in the remote processing system 102 in the exemplary embodiment may, in an alternative embodiment, be performed by processors coupled to the display device 145. For example, in one embodiment, the source content may be initially transmitted to the user/consumer at the user location 104 before being processed. Once the user selects a particular element, a processor coupled to the display device 145 can generate metadata representing the X, Y coordinates of the selected item in the source content, and this metadata can then be transmitted back to the remote processing system 102. The subsequent processing steps discussed earlier (e.g., the image matching and merging processes) can then be performed on the selected item before the data is pushed back to the user/consumer.
[034] Additionally, it is contemplated that method 200 can be performed using digital or analog content, live or recorded, and still or streaming, provided by the content source 110, where the metadata related to the product's identity and X, Y coordinates can be stored and distributed with the live or recorded content, or, alternatively, this data can be stored in the remote processing system 102 (or a combination of the remote processing system 102 and the user location 104) and served or created dynamically, as would be understood by a person skilled in the art. Additionally, in the embodiment where the source content is generated live or is pre-recorded, the matching takes place so that the content and the item identification and location metadata are distributed synchronously. In a further embodiment, the content with the related synchronized item identification and location metadata can be stored and played directly by the distribution server 140 to the display device 145.
[035] Finally, it is noted that although system 100 above in Figure 1 and method 200 in Figure 2 have been primarily described in relation to image and video data, it is also contemplated that system 100 and method 200 may use audio data.
[036] For example, the reference content database 115 may contain audio items, such as songs or the voices of famous individuals, that are capable of being identified in the source content. The matching processor 120 can perform a similar matching process for the source content and match the audio elements in the source content to the audio items in the reference content database 115. The additional information database 130 may also contain additional information about the identified audio items, such as the album of a song, or the movies, concerts, sports teams, political parties, etc. associated with famous individuals whose voices are identified. The end user can then select a designated area in the source content, or otherwise indicate an interest in the audio item, to receive the additional information using the system and process described in this document.
[037] Although the disclosure has been described in conjunction with exemplary embodiments, it is understood that the term "exemplary" means merely an example. Accordingly, the application is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the system and method for recognizing items in media data and distributing related information as disclosed herein.
[038] Additionally, in the above detailed description, several specific details have been presented in order to provide a complete understanding of the present invention. However, it should be apparent to those skilled in the art that the system and method for recognizing items in media data and distributing related information can be practiced without these specific details. In other cases, well-known methods, procedures, components, and circuits have not been described in detail in order not to unnecessarily obscure aspects of the system and method disclosed herein.
Claims:
Claims (21)
[0001]
1. SYSTEM FOR DYNAMICALLY RECOGNIZING INDIVIDUAL ITEMS IN IMAGES CONTAINED IN VIDEO SOURCE CONTENT, from a content source at the time of reproduction of the video source content, and distributing related information, characterized by comprising: at least one electronic database storing a plurality of digital images and information related to each of the plurality of digital images; at least one processor communicatively coupled to the at least one electronic database, the at least one processor configured to: (1) digitize at least one of the images contained in the video source content from the content source and dynamically compare elements in the at least one image with the plurality of digital images stored in the at least one electronic database to identify at least one individual item in the at least one image that matches at least one of the plurality of digital images stored in the at least one electronic database; (2) access information stored in the at least one electronic database that is related to the at least one digital image that matches the at least one identified individual item; (3) generate coordinate position data indicating a position of the at least one identified individual item in the video source content; and (4) generate a new integrated video stream by merging the at least one image contained in the video source content with the accessed information relating to the at least one identified individual item and the coordinate position data of the at least one identified individual item in the at least one image; and a server configured to transmit the new integrated video stream containing the merged data to a display device that displays the at least one image with at least one electronic indicator, which is based on the coordinate position data, for the at least one identified individual item, such that the at least one individual item identified in the image data is active in the image and configured to be selected by a user in order to view the accessed information relating to the identified individual item.
[0002]
2. SYSTEM according to claim 1, characterized in that the images are part of a video transmission.
[0003]
3. SYSTEM according to claim 2, characterized in that the video transmission is live.
[0004]
4. SYSTEM according to claim 2, characterized in that the video transmission is pre-recorded.
[0005]
5. SYSTEM according to claim 1, characterized in that the images are part of at least one photo.
[0006]
6. SYSTEM according to claim 1, characterized in that the images comprise analog data.
[0007]
7. SYSTEM according to claim 1, characterized in that the images comprise digital data.
[0008]
8. SYSTEM according to claim 1, characterized in that the at least one processor comprises: a first processor configured to digitize the at least one image contained in the video source content and identify the at least one individual item in the at least one image that corresponds to one of the plurality of digital images stored in the at least one electronic database; a second processor configured to access information stored in the at least one electronic database that is related to the at least one digital image corresponding to the at least one identified individual item; a third processor configured to generate coordinate position data indicating the position of the at least one individual item identified in the video source content; and a fourth processor configured to generate a new integrated video stream by merging the at least one image contained in the video source content with the accessed information relating to the at least one identified individual item and the coordinate position data of the at least one identified individual item.
[0009]
9. SYSTEM according to claim 1, characterized in that at least one electronic database comprises a first electronic database storing the plurality of digital images and a second electronic database storing the information related to each of the plurality of digital images.
[0010]
10. SYSTEM according to claim 1, characterized in that the processor is further configured to display the accessed information related to at least one identified individual item in response to a user selection of the at least one identified individual item that is selectable.
[0011]
11. SYSTEM according to claim 1, characterized in that the processor is further configured to update the coordinate position data indicating the position of at least one identified individual item.
[0012]
12. SYSTEM according to claim 1, characterized in that the display device is at least one of a computer, a smartphone, a tablet, a PDA, a television, a display device with a decoder-type processor, a Blu-ray player and a video game console.
[0013]
13. SYSTEM according to claim 1, characterized in that the at least one identified individual item that is to be selectable is configured to be selected by at least one of the display device or a user of the display device.
[0014]
14. SYSTEM according to claim 1, characterized in that the processor is further configured to digitize the at least one image contained in the video source content and identify a plurality of individual items in the at least one image that correspond to a plurality of respective digital images stored in the at least one electronic database.
[0015]
15. METHOD FOR DYNAMICALLY RECOGNIZING INDIVIDUAL ITEMS IN IMAGES CONTAINED IN VIDEO SOURCE CONTENT, from a content source at the time of reproduction of the video source content, and distributing related information, characterized by the method comprising: digitizing at least one of the images contained in the video source content from the content source; dynamically comparing individual elements in the at least one image contained in the video source content with a plurality of digital images stored in at least one electronic database to identify at least one individual item in the at least one image contained in the video source content corresponding to at least one of the plurality of digital images stored in the at least one electronic database; accessing information stored in the at least one electronic database that is related to the at least one digital image that matches the at least one identified individual item; generating coordinate position data that indicates a position of the at least one individual item identified in the video source content; generating a new integrated video stream by merging the at least one image contained in the video source content with the accessed information relating to the at least one identified individual item and the coordinate position data of the at least one individual item identified in the at least one image contained in the video source content; and transmitting the new integrated video stream containing the merged data to a display device that displays the at least one image with at least one electronic indicator, which is based on the coordinate position data, for the at least one identified individual item, such that the at least one individual item identified in the image is active in the image and configured to be selected by a user in order to view the accessed information related to the identified individual item.
[0016]
16. METHOD according to claim 15, characterized in that it further comprises displaying information related to the at least one identified individual item in response to a user selection of the at least one identified individual item that is selectable.
[0017]
17. METHOD according to claim 15, characterized in that it further comprises updating the coordinate position data that indicate the position of at least one identified individual item.
[0018]
18. METHOD according to claim 15, characterized in that it further comprises selecting the at least one individual item by at least one of the display device or a user of the display device.
[0019]
19. METHOD according to claim 15, characterized in that it further comprises digitizing the at least one image contained in the video source content and identifying a plurality of individual items in the at least one image that correspond to a plurality of respective digital images stored in the at least one electronic database.
[0020]
20. METHOD FOR DYNAMICALLY RECOGNIZING INDIVIDUAL ITEMS IN IMAGES CONTAINED IN VIDEO SOURCE CONTENT, from a content source at the time of reproduction of the video source content, and distributing related information, characterized by the method comprising: transmitting to a display device the images contained in the video source content from the content source, each image having at least one item; receiving from the display device coordinate position data indicating the position of a selected item in the at least one image; dynamically comparing the plurality of digital images stored in at least one electronic database with individual elements of the selected item to identify at least one of the plurality of digital images stored in the at least one electronic database that matches the individual elements of the selected item; accessing information stored in the at least one electronic database that is related to the at least one identified digital image; merging the at least one image with the coordinate position data and the accessed information relating to the at least one identified digital image; and transmitting the merged data to the display device, which displays the at least one image with at least one electronic indicator, which is based on the coordinate position data, for the selected individual item such that the selected item in the at least one image is still selectable.
[0021]
21. METHOD according to claim 20, characterized in that it further comprises displaying the accessed information related to the selected item in response to a user selection of the selected item that is still selectable.
Similar technologies:
Publication No. | Publication Date | Patent Title
BR112013018142B1|2021-06-15|SYSTEM AND METHOD FOR DYNAMICALLY RECOGNIZING INDIVIDUAL ITEMS IN IMAGES CONTAINED IN VIDEO SOURCE CONTENT
US9723335B2|2017-08-01|Serving objects to be inserted to videos and tracking usage statistics thereof
US11228555B2|2022-01-18|Interactive content in a messaging platform
CN102244807A|2011-11-16|Microsoft Corporation
EP3010238A2|2016-04-20|Method of providing information and electronic device implementing the same
US10152724B2|2018-12-11|Technology of assisting context based service
US20130081073A1|2013-03-28|Method and apparatus for providing and obtaining reward service linked with media contents
Patent family:
Publication No. | Publication Date
EP2652638B1|2020-07-29|
US20180288448A1|2018-10-04|
CA2934284A1|2012-07-26|
US20160234538A1|2016-08-11|
CA2824329C|2016-11-01|
KR101976656B1|2019-05-09|
US20170034550A1|2017-02-02|
CA2934284C|2020-08-25|
US9843824B2|2017-12-12|
KR101839927B1|2018-03-19|
BR112013018142A2|2016-11-08|
KR101785601B1|2017-10-16|
KR20170116168A|2017-10-18|
US10136168B2|2018-11-20|
KR20160013249A|2016-02-03|
US20190089998A1|2019-03-21|
EP2652638A4|2014-05-07|
US9681202B2|2017-06-13|
US20190387261A1|2019-12-19|
KR102114701B1|2020-05-25|
US9503762B2|2016-11-22|
US20210044843A1|2021-02-11|
US10779019B2|2020-09-15|
WO2012099954A1|2012-07-26|
US9167304B2|2015-10-20|
KR20190050864A|2019-05-13|
US9344774B2|2016-05-17|
KR102271191B1|2021-06-30|
CA2824329A1|2012-07-26|
KR101590494B1|2016-02-01|
US20180070116A1|2018-03-08|
KR20130120514A|2013-11-04|
KR101901842B1|2018-09-27|
KR20180028551A|2018-03-16|
US10003833B2|2018-06-19|
US20160073139A1|2016-03-10|
US20150382078A1|2015-12-31|
US10327016B2|2019-06-18|
KR20200066361A|2020-06-09|
US20120183229A1|2012-07-19|
EP2652638A1|2013-10-23|
KR20180105269A|2018-09-27|
Cited references:
Publication No. | Filing Date | Publication Date | Applicant | Patent Title

US6064979A|1996-10-25|2000-05-16|Ipf, Inc.|Method of and system for finding and serving consumer product related information over the internet using manufacturer identification numbers|
US6573907B1|1997-07-03|2003-06-03|Obvious Technology|Network distribution and management of interactive video and multi-media containers|
US6850901B1|1999-12-17|2005-02-01|World Theatre, Inc.|System and method permitting customers to order products from multiple participating merchants|
JP4501209B2|2000-03-08|2010-07-14|ソニー株式会社|Information processing apparatus, information processing method, and remote control commander|
US8255291B1|2000-08-18|2012-08-28|Tensilrus Capital Nv Llc|System, method and apparatus for interactive and comparative shopping|
US20020128999A1|2000-09-25|2002-09-12|Fuisz Richard C.|Method, apparatus and system for providing access to product data|
US7016532B2|2000-11-06|2006-03-21|Evryx Technologies|Image capture and identification system and process|
US7680324B2|2000-11-06|2010-03-16|Evryx Technologies, Inc.|Use of image-derived information as search criteria for internet and other search engines|
US8224078B2|2000-11-06|2012-07-17|Nant Holdings Ip, Llc|Image capture and identification system and process|
US7207057B1|2000-11-16|2007-04-17|Rowe Lynn T|System and method for collaborative, peer-to-peer creation, management & synchronous, multi-platform distribution of profile-specified media objects|
US20020198789A1|2001-06-22|2002-12-26|Sony Corp. And Sony Music Entertainment, Inc.|Apparatus and method for identifying and purchasing music|
US6968337B2|2001-07-10|2005-11-22|Audible Magic Corporation|Method and apparatus for identifying an unknown work|
US7797204B2|2001-12-08|2010-09-14|Balent Bruce F|Distributed personal automation and shopping method, apparatus, and process|
US7929958B2|2003-02-22|2011-04-19|Julian Van Erlach|Methods, systems, and apparatus for providing enhanced telecommunication services|
US7346554B2|2002-05-01|2008-03-18|Matsushita Electric Industrial Co., Ltd.|Online shopping system, information processing apparatus and method, and information processing program recording medium|
AU2003239385A1|2002-05-10|2003-11-11|Richard R. Reisman|Method and apparatus for browsing using multiple coordinated device|
JP4019063B2|2003-04-18|2007-12-05|光雄 中山|Optical terminal device, image processing method and system|
US7072669B1|2003-05-23|2006-07-04|Verizon Corporate Services Group Inc.|Method for localizing the position of a wireless device|
US7596513B2|2003-10-31|2009-09-29|Intuit Inc.|Internet enhanced local shopping system and method|
WO2010105244A2|2009-03-12|2010-09-16|Exbiblio B.V.|Performing actions based on capturing information from rendered documents, such as documents under copyright|
US7751805B2|2004-02-20|2010-07-06|Google Inc.|Mobile image-based information retrieval system|
US7707239B2|2004-11-01|2010-04-27|Scenera Technologies, Llc|Using local networks for location information and image tagging|
EP1952265A2|2005-10-03|2008-08-06|Teletech Holdings Inc.|Virtual retail assistant|
GB2431793B|2005-10-31|2011-04-27|Sony Uk Ltd|Image processing|
JP2007172109A|2005-12-20|2007-07-05|Sanden Corp|Reader/writer unit for automatic vending machine|
US20100138875A1|2007-11-30|2010-06-03|Johnson Gerard C|Method and system for improved interactive television processing|
EP2097862A4|2006-12-01|2011-11-09|Hsni Llc|Method and system for improved interactive television processing|
US7921071B2|2007-11-16|2011-04-05|Amazon Technologies, Inc.|Processes for improving the utility of personalized recommendations generated by a recommendation engine|
US8055688B2|2007-12-07|2011-11-08|Patrick Giblin|Method and system for meta-tagging media content and distribution|
US20090319388A1|2008-06-20|2009-12-24|Jian Yuan|Image Capture for Purchases|
US8412625B2|2008-08-25|2013-04-02|Bruno Pilo' & Associates, Llc|System and methods for a multi-channel payment platform|
US7756758B2|2008-12-08|2010-07-13|Hsn Lp|Method and system for improved E-commerce shopping|
US9195898B2|2009-04-14|2015-11-24|Qualcomm Incorporated|Systems and methods for image recognition using mobile devices|
US8146799B2|2009-05-06|2012-04-03|General Mills, Inc.|Product information systems and methods|
US8311337B2|2010-06-15|2012-11-13|Cyberlink Corp.|Systems and methods for organizing and accessing feature vectors in digital images|
US8780130B2|2010-11-30|2014-07-15|Sitting Man, Llc|Methods, systems, and computer program products for binding attributes between visual components|
CN102741875B|2010-11-30|2017-10-17|松下电器(美国)知识产权公司|Content management device, contents management method, content supervisor and integrated circuit|
KR102114701B1|2011-01-18|2020-05-25|에이치에스엔아이 엘엘씨|System and method for recognition of items in media data and delivery of information related thereto|
US10628835B2|2011-10-11|2020-04-21|Consumeron, Llc|System and method for remote acquisition and deliver of goods|
US9047633B2|2012-02-07|2015-06-02|Zencolor Corporation|System and method for identifying, searching and matching products based on color|
US9699485B2|2012-08-31|2017-07-04|Facebook, Inc.|Sharing television and video programming through social networking|
US10037582B2|2013-08-08|2018-07-31|Walmart Apollo, Llc|Personal merchandise cataloguing system with item tracking and social network functionality|
US9818105B2|2013-10-29|2017-11-14|Elwha Llc|Guaranty provisioning via wireless service purveyance|
US9924215B2|2014-01-09|2018-03-20|Hsni, Llc|Digital media content management system and method|
CN109906455A|2016-09-08|2019-06-18|Aiq私人股份有限公司|Object detection in visual search query|US7565008B2|2000-11-06|2009-07-21|Evryx Technologies, Inc.|Data capture and identification system and process|
US7680324B2|2000-11-06|2010-03-16|Evryx Technologies, Inc.|Use of image-derived information as search criteria for internet and other search engines|
US7899243B2|2000-11-06|2011-03-01|Evryx Technologies, Inc.|Image capture and identification system and process|
US8224078B2|2000-11-06|2012-07-17|Nant Holdings Ip, Llc|Image capture and identification system and process|
KR102114701B1|2011-01-18|2020-05-25|에이치에스엔아이 엘엘씨|System and method for recognition of items in media data and delivery of information related thereto|
US9210208B2|2011-06-21|2015-12-08|The Nielsen Company , Llc|Monitoring streaming media content|
US9077696B2|2012-04-26|2015-07-07|Qualcomm Incorporated|Transferring data items amongst computing devices using metadata that identifies a location of a transferred item|
WO2015007910A1|2013-07-19|2015-01-22|Koninklijke Philips N.V.|Hdr metadata transport|
US20150120473A1|2013-10-29|2015-04-30|Elwha LLC, a limited liability corporation of the State of Delaware|Vendor-facilitated guaranty provisioning|
US9934498B2|2013-10-29|2018-04-03|Elwha Llc|Facilitating guaranty provisioning for an exchange|
US9818105B2|2013-10-29|2017-11-14|Elwha Llc|Guaranty provisioning via wireless service purveyance|
US10157407B2|2013-10-29|2018-12-18|Elwha Llc|Financier-facilitated guaranty provisioning|
US9924215B2|2014-01-09|2018-03-20|Hsni, Llc|Digital media content management system and method|
US9747727B2|2014-03-11|2017-08-29|Amazon Technologies, Inc.|Object customization and accessorization in video content|
KR20150107464A|2014-03-14|2015-09-23|삼성전자주식회사|Apparatus for processing contents and method for providing event thereof|
US20170358023A1|2014-11-03|2017-12-14|Dibzit.Com, Inc.|System and method for identifying and using objects in video|
US10970843B1|2015-06-24|2021-04-06|Amazon Technologies, Inc.|Generating interactive content using a media universe database|
US10531162B2|2015-11-04|2020-01-07|Cj Enm Co., Ltd.|Real-time integrated data mapping device and method for product coordinates tracking data in image content of multi-users|
US11134316B1|2016-12-28|2021-09-28|Shopsee, Inc.|Integrated shopping within long-form entertainment|
US11038111B2|2017-11-28|2021-06-15|Samsung Display Co., Ltd.|Organic electroluminescence device and monoamine compound for organic electroluminescence device|
US20200026869A1|2018-07-17|2020-01-23|Vidit, LLC|Systems and methods for identification of a marker in a graphical object|
WO2021137343A1|2020-01-03|2021-07-08|엘지전자 주식회사|Display apparatus and display system|
Legal status:
2018-12-18| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-10-22| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-12-29| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2021-04-06| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-06-15| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 18/01/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application No. | Filing Date | Patent Title
US201161433755P| true| 2011-01-18|2011-01-18|
US61/433,755|2011-01-18|
PCT/US2012/021710|WO2012099954A1|2011-01-18|2012-01-18|System and method for recognition of items in media data and delivery of information related thereto|